Worst-Case Complexity of Smoothing Quadratic Regularization Methods for Non-Lipschitzian Optimization

Authors

  • Wei Bian
  • Xiaojun Chen
Abstract

In this paper, we propose a smoothing quadratic regularization (SQR) algorithm for solving a class of nonsmooth, nonconvex, perhaps even non-Lipschitzian minimization problems, which have wide applications in statistics and sparse reconstruction. The proposed SQR algorithm is a first-order method. At each iteration, the SQR algorithm solves a strongly convex quadratic minimization problem with a diagonal Hessian matrix, which has a simple closed-form solution that is inexpensive to calculate. We show that the worst-case complexity of reaching an ε scaled stationary point is O(ε^{-2}). Moreover, if the objective function is locally Lipschitz continuous, the SQR algorithm with a slightly modified updating scheme for the smoothing parameter and iterate can obtain an ε Clarke stationary point in at most O(ε^{-3}) iterations.
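The abstract's key computational point is that each SQR iteration minimizes a quadratic model with a diagonal Hessian, so the update is available in closed form. The paper's exact rules for updating the smoothing parameter and the diagonal weights are not reproduced here; the following is a minimal sketch under illustrative assumptions: a least-squares data term with an ℓp penalty (0 < p < 1), the smoothing surrogate sqrt(x² + μ²) for |x|, a fixed smoothing parameter μ, and a single scalar diagonal weight d. All names are hypothetical.

```python
import numpy as np

def sqr_step(x, A, b, lam, p, mu, d):
    """One illustrative SQR-style step for min 0.5*||Ax - b||^2 + lam*sum |x_i|^p.

    The non-Lipschitz term |x_i|^p is smoothed as (x_i^2 + mu^2)^(p/2).
    The quadratic subproblem  min_y  g^T (y - x) + (d/2)*||y - x||^2
    with diagonal Hessian d*I has the closed-form minimizer x - g/d.
    """
    r = A @ x - b
    theta = np.sqrt(x**2 + mu**2)                  # smooth surrogate for |x_i|
    grad = A.T @ r + lam * p * theta**(p - 2) * x  # gradient of the smoothed objective
    return x - grad / d                            # closed-form subproblem solution
```

Because the subproblem Hessian is diagonal, the per-iteration cost is dominated by the two matrix-vector products, which is what makes the method attractive as a first-order scheme.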


Related articles

Smoothing Quadratic Regularization Methods for Box Constrained Non-Lipschitz Optimization in Image Restoration

Abstract. We propose a smoothing quadratic regularization (SQR) method for solving box constrained optimization problems with a non-Lipschitz regularization term that includes the ℓp norm (0 < p < 1) of the gradient of the underlying image in the ℓ2-ℓp problem as a special case. At each iteration of the SQR algorithm, a new iterate is generated by solving a strongly convex quadratic problem wit...


Partially separable convexly-constrained optimization with non-Lipschitzian singularities and its complexity

An adaptive regularization algorithm using high-order models is proposed for partially separable convexly constrained nonlinear optimization problems whose objective function contains non-Lipschitzian ℓq-norm regularization terms for q ∈ (0, 1). It is shown that the algorithm using a p-th order Taylor model for p odd needs in general at most O(ε^{-(p+1)/p}) evaluations of the objective function a...


Modified Gauss-Newton scheme with worst case guarantees for global performance

In this paper we suggest a new version of the Gauss-Newton method for solving a system of nonlinear equations, which combines the idea of a sharp merit function with the idea of a quadratic regularization. For this scheme we prove general convergence results and, under a natural non-degeneracy assumption, local quadratic convergence. We analyze the behavior of this scheme on some natural problem ...


LANCS Workshop on Modelling and Solving Complex Optimisation Problems

Towards optimal Newton-type methods for nonconvex smooth optimization Coralia Cartis Coralia.Cartis (at) ed.ac.uk School of Mathematics, Edinburgh University We show that the steepest-descent and Newton methods for unconstrained non-convex optimization, under standard assumptions, may both require a number of iterations and function evaluations arbitrarily close to the steepest-descent’s global...


Quadratic regularization with cubic descent for unconstrained optimization

Cubic-regularization and trust-region methods with worst-case first-order complexity O(ε^{-3/2}) and worst-case second-order complexity O(ε^{-3}) have been developed in the last few years. In this paper it is proved that the same complexities are achieved by means of a quadratic-regularization method with a cubic sufficient-descent condition instead of the more usual predicted-reduction based descent...



Journal:
  • SIAM Journal on Optimization

Volume 23, Issue -

Pages -

Published 2013